Airbnb is a two-sided marketplace that brings together hosts who own listings for rent and prospective guests from around the globe. Applying neural network-based learning-to-rank techniques has led to significant improvements in matching guests with hosts. These ranking improvements were driven by a core strategy: order the listings by their estimated booking probabilities, then iterate on techniques to make these estimates more and more accurate. Implicit in this strategy was the assumption that the booking probability of a listing can be determined independently of the other listings in the search results. In this paper we discuss how this assumption, pervasive throughout commonly used learning-to-rank frameworks, is false. We provide a theoretical foundation correcting this assumption, followed by efficient neural network architectures based on the theory. Explicitly accounting for possible similarities between listings, and reducing them to diversify the search results, generated a strong positive impact. We discuss these metric wins as part of the online A/B tests of the theory. Our method provides a practical way to diversify search results for large-scale production ranking systems.
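The abstract does not spell out the architecture, but the core idea of trading a listing's estimated booking probability against its similarity to already-ranked listings can be sketched with a generic greedy (MMR-style) re-ranker. This is an illustration under assumed names and a simple linear trade-off, not Airbnb's production method:

```python
import numpy as np

def diversify(scores, sim, k, lam=0.7):
    """Greedy re-ranking: at each step pick the listing that best trades
    off estimated booking probability (scores) against similarity to the
    listings already selected (sim), MMR-style."""
    selected, remaining = [], list(range(len(scores)))
    while remaining and len(selected) < k:
        best, best_val = None, -np.inf
        for i in remaining:
            penalty = max(sim[i, j] for j in selected) if selected else 0.0
            val = lam * scores[i] - (1 - lam) * penalty
            if val > best_val:
                best, best_val = i, val
        selected.append(best)
        remaining.remove(best)
    return selected
```

With two near-duplicate high-score listings and one dissimilar mid-score listing, the re-ranker promotes the dissimilar listing above the second duplicate, which is the diversification effect the abstract describes.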
This paper presents work on restoring punctuation in transcripts generated by multilingual ASR systems. The focus languages are English, Mandarin, and Malay, three of the most popular languages in Singapore. To the best of our knowledge, this is the first system that can tackle punctuation restoration for these three languages simultaneously. Traditional approaches usually treat the task as sequence labeling; this work instead adopts a slot-filling approach that predicts the presence and type of punctuation mark at each word boundary. The approach is similar to the masked-language-model objective used during BERT pre-training, but instead of predicting a masked word, our model predicts masked punctuation. Additionally, we find that using Jieba instead of only the built-in SentencePiece tokenizer of XLM-R can significantly improve performance when punctuating Mandarin transcripts. Experimental results on the English and Mandarin IWSLT2022 datasets and on Malay news show that the proposed approach achieves state-of-the-art results for Mandarin, with a 73.8% F1 score, while maintaining reasonable F1 scores for English and Malay, i.e., 74.7% and 78% respectively. Our source code, which allows reproducing the results and building a simple web-based demonstration application, is available on GitHub.
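The slot-filling formulation can be made concrete by showing how a punctuated token stream is converted into per-word training targets, one punctuation label per word boundary. The label inventory and mapping below are assumptions for illustration, not the paper's exact scheme:

```python
# Map punctuation tokens to slot labels; every other word gets "NONE".
PUNCT = {",": "COMMA", "，": "COMMA", ".": "PERIOD", "。": "PERIOD",
         "?": "QMARK", "？": "QMARK"}

def to_slot_labels(tokens):
    """Convert a punctuated token stream into (word, label) pairs, where
    the label is the punctuation mark that follows the word (or "NONE").
    This is the slot-filling target format: the model later predicts the
    label at each word boundary of an unpunctuated ASR transcript."""
    pairs = []
    for tok in tokens:
        if tok in PUNCT:
            if pairs:  # attach the mark to the preceding word's slot
                word, _ = pairs[-1]
                pairs[-1] = (word, PUNCT[tok])
        else:
            pairs.append((tok, "NONE"))
    return pairs
```

At inference time the input contains no punctuation tokens, so every slot starts as "NONE" and the model fills in the marks, analogous to predicting masked tokens in BERT pre-training.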
Graph Convolutional Networks (GCNs) are extensively utilized for deep learning on graphs. The large data sizes of graphs and their vertex features make scalable training algorithms and distributed memory systems necessary. Since the convolution operation on graphs induces irregular memory access patterns, designing a memory- and communication-efficient parallel algorithm for GCN training poses unique challenges. We propose a highly parallel training algorithm that scales to large processor counts. In our solution, the large adjacency and vertex-feature matrices are partitioned among processors. We exploit the vertex partitioning of the graph to use non-blocking point-to-point communication operations between processors for better scalability. To further minimize the parallelization overheads, we introduce a sparse matrix partitioning scheme based on a hypergraph partitioning model for full-batch training. We also propose a novel stochastic hypergraph model to encode the expected communication volume in mini-batch training. We show the merits of the hypergraph model, previously unexplored for GCN training, over the standard graph partitioning model, which does not accurately encode the communication costs. Experiments on real-world graph datasets demonstrate that the proposed algorithms achieve considerable speedups over alternative solutions. The communication-cost optimizations become even more pronounced at high processor counts. The performance benefits are preserved in deeper GCNs with more layers, as well as on billion-scale graphs.
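The abstract's central claim — that plain edge-cut misstates communication cost while a hypergraph connectivity metric captures it — can be illustrated with a toy volume counter. The sketch below is not the paper's partitioner; it only shows the metric being optimized, under a simplified 1D vertex partition:

```python
def comm_volume(edges, part):
    """Count the vertex-feature transfers a vertex partition induces in
    one GCN layer: each vertex is sent once per *distinct* remote part
    that needs its features (the connectivity-1 metric that hypergraph
    partitioning models exactly; edge-cut only approximates it, since a
    vertex with several neighbors in one remote part is sent once, not
    once per edge)."""
    remote = {}  # vertex -> set of other parts touching it
    for u, v in edges:
        if part[u] != part[v]:
            remote.setdefault(u, set()).add(part[v])
            remote.setdefault(v, set()).add(part[u])
    return sum(len(parts) for parts in remote.values())
```

For a star vertex with three neighbors all placed in one other part, edge-cut reports 3 while the true volume contribution of the center vertex is 1 transfer, which is exactly the discrepancy the hypergraph model corrects.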
The Chinese language is a prominent feature of the Chinese communities in the countries of the Malay Archipelago. In these countries, Chinese has undergone a process of adjustment to the local languages and cultures, which has given rise to a distinct Chinese variety in each country. In this paper, we conduct a quantitative analysis of Chinese news texts collected from five Malay Archipelago countries. The statistical results show that the Chinese varieties used in these five countries differ from their modern mainland-China counterpart. At the same time, we extract and categorize several Chinese words specific to each country. All of these differences reflect how the Chinese language has developed overseas, and demonstrate the profound influence of local societies and cultures on the development of Chinese.
Although machine-learning and ranking-based systems are widely used in sensitive decision-making processes (e.g., screening job candidates, assigning credit scores), there are concerns about unintended bias in their outcomes, which has motivated goals of algorithmic fairness (e.g., demographic parity, equal opportunity). "Algorithmic recourse" offers feasible recovery actions to change undesirable outcomes through the modification of attributes. We introduce the notion of ranking-level recourse fairness and develop a "recourse-aware ranking" solution that satisfies recourse-fairness constraints on rankings while minimizing the cost of the suggested modifications. Our solution proposes interventions that re-rank a list of database records and mitigate group-level unfairness; specifically, the disproportionate representation and recourse-cost imbalance of subgroups. The re-ranking identifies minimal modifications to data points, with attribute modifications weighted by their ease of recourse. We then present an efficient block-based extension that enables re-ranking at any granularity (e.g., multiple brackets of bank-loan interest rates, multiple pages of search-engine results). Evaluation on real datasets shows that, while existing methods may even exacerbate recourse unfairness, our solution, RAGUEL, significantly improves recourse-aware fairness. RAGUEL outperforms the alternatives at improving recourse fairness through a combined process of counterfactual generation and re-ranking, while remaining efficient on large-scale datasets.
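RAGUEL's actual objective couples counterfactual generation with group-fairness constraints, which an abstract cannot fully specify; the sketch below only illustrates the two primitive notions it builds on: an actionability-weighted recourse cost, and re-ranking records by how cheap their recourse is. The function names, the linear cost form, and the data layout are all assumptions for illustration:

```python
def recourse_cost(record, counterfactual, weights):
    """Weighted cost of modifying a record into its counterfactual:
    attributes that are harder to act on (higher weight) cost more,
    mirroring the ease-of-recourse weighting in the abstract."""
    return sum(w * abs(c - v)
               for v, c, w in zip(record, counterfactual, weights))

def rerank_by_recourse(records, counterfactuals, weights):
    """Toy re-ranking: order records so that those with cheaper recourse
    come first. The real solution additionally enforces group-level
    representation and cost-balance constraints."""
    costs = [recourse_cost(r, c, weights)
             for r, c in zip(records, counterfactuals)]
    order = sorted(range(len(records)), key=lambda i: costs[i])
    return order, costs
```

With, say, income cheap to change (weight 1) and age effectively immutable (weight 5), a record needing only an income change outranks one needing an age change, even if the raw attribute distances are equal.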
Imitation learning holds great promise for efficiently learning policies for complex decision-making problems. Current state-of-the-art algorithms often use inverse reinforcement learning (IRL), where, given a set of expert demonstrations, the agent alternates between inferring a reward function and the associated optimal policy. However, such IRL approaches often require substantial online interaction on complex control problems. In this work, we present Regularized Optimal Transport (ROT), a new imitation learning algorithm that builds on recent advances in optimal-transport-based trajectory matching. Our key technical insight is that adaptively combining trajectory-matching rewards with behavior cloning can significantly accelerate imitation, even with only a few demonstrations. Our experiments on 20 visual control tasks across the DeepMind Control Suite, OpenAI Robotics, and Meta-World benchmarks show that, on average, ROT reaches 90% of expert performance substantially faster than prior state-of-the-art methods. On real-world robotic manipulation, with just one demonstration and an hour of online training, ROT achieves an average success rate of 90.1% across 14 tasks.
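The trajectory-matching reward can be sketched by matching agent states to expert states and rewarding each agent step by its negative matched distance. ROT's formulation uses entropic optimal transport; the hard (Hungarian) assignment below is a deliberate simplification, and `ot_reward` is a hypothetical helper name, not the paper's API:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_reward(agent_traj, expert_traj):
    """Per-step imitation reward: match each agent state to an expert
    state by solving an assignment problem over pairwise distances (a
    hard-OT stand-in for the entropic OT used in practice), then reward
    each agent step with the negative distance to its matched state."""
    # cost[i, j] = Euclidean distance between agent step i and expert step j
    cost = np.linalg.norm(
        agent_traj[:, None, :] - expert_traj[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    rewards = np.zeros(len(agent_traj))
    rewards[rows] = -cost[rows, cols]
    return rewards
```

In the full algorithm this matching-based reward would be combined adaptively with a behavior-cloning loss on the demonstrations, which is the regularization the name refers to.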
Heterogeneous information networks (HINs) capture complex relations among various types of entities and have been widely used to improve the effectiveness of various data mining tasks, such as in recommender systems. Many existing HIN-based recommendation algorithms rely on hand-crafted meta-paths to extract semantic information from the network. These algorithms depend on extensive domain knowledge with which to select the optimal set of meta-paths. For applications where the HIN is highly complex, with numerous node and link types, the hand-crafting approach is too tedious and error-prone. To tackle this problem, we propose the Reinforcement-learning-based Meta-path Selection (RMS) framework, which selects effective meta-paths and incorporates them into existing meta-path-based recommenders. To identify high-quality meta-paths, RMS trains a reinforcement learning (RL) based policy network (agent), which obtains its rewards from the performance of the downstream recommendation task. We design a HIN-based recommendation model, HRec, that uses meta-path information effectively. We integrate HRec with RMS to derive our recommendation solution, RMS-HRec, which uses effective meta-paths automatically. Experiments on real datasets show that our algorithms can significantly improve the performance of recommendation models by capturing important meta-paths automatically.
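The reward loop described in the abstract — an agent picks meta-paths and is rewarded by downstream recommendation performance — can be sketched with a simple epsilon-greedy bandit in place of the RMS policy network. Everything here (the bandit, the candidate path names, `reward_fn` standing in for a recommender's validation metric) is an illustrative assumption, not the paper's method:

```python
import random

def select_meta_path(candidates, reward_fn, rounds=200, eps=0.3, seed=0):
    """Bandit-style stand-in for the RL agent: repeatedly pick a
    candidate meta-path (explore with probability eps, otherwise exploit
    the best running average), observe the downstream recommendation
    reward, and update that path's value estimate incrementally."""
    rng = random.Random(seed)
    value = {m: 0.0 for m in candidates}
    count = {m: 0 for m in candidates}
    for _ in range(rounds):
        if rng.random() < eps or all(c == 0 for c in count.values()):
            path = rng.choice(candidates)          # explore
        else:
            path = max(candidates, key=lambda p: value[p])  # exploit
        reward = reward_fn(path)  # e.g. validation metric of the recommender
        count[path] += 1
        value[path] += (reward - value[path]) / count[path]
    return max(candidates, key=lambda p: value[p])
```

The actual RMS agent is a trained policy network rather than a bandit, but the interface is the same: meta-path choices in, downstream task performance back as reward.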
Causality detection has attracted much attention in the fields of natural language processing and linguistics research. It has essential applications in information retrieval, event prediction, question answering, financial analysis, and market research. In this study, we explore several methods of identifying and extracting cause-effect pairs in financial documents using Transformers. For this purpose, we propose an approach combining POS tagging with the BIO scheme, which can be integrated with modern Transformer models to address this challenge of identifying causality in a given text. Our best methodology achieved an F1 score of 0.9551 and an exact-match score of 0.8777 on the blind test of the FinCausal-2021 shared task at the FinCausal-2021 workshop.
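The BIO encoding for cause-effect extraction can be shown on a toy sentence: each token receives a begin/inside/outside label for its span role, and in the paper's approach these labels are combined with POS tags as model input. The exact label inventory is not given in the abstract, so the B-C/I-C (cause) and B-E/I-E (effect) names below are assumptions:

```python
def bio_tags(tokens, cause_span, effect_span):
    """Encode token-index spans as BIO labels: B-C/I-C for the cause
    span, B-E/I-E for the effect span, O elsewhere. Spans are half-open
    [start, end) ranges over token positions."""
    tags = ["O"] * len(tokens)
    for (start, end), role in ((cause_span, "C"), (effect_span, "E")):
        for i in range(start, end):
            tags[i] = ("B-" if i == start else "I-") + role
    return tags
```

A sequence tagger trained on such targets recovers both spans jointly, and an exact-match score (as reported above) requires every predicted tag in the sentence to be correct.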